The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations. Memory-augmentation, or the explicit storing of past information in external memory for subsequent predictions, has become a constructive avenue for mitigating this limitation. We argue that memory-augmented Transformers can benefit substantially from considering insights from the memory literature in humans. We detail an approach for integrating evidence from the human memory system through the specification of cross-domain linking hypotheses. We then provide an empirical demonstration to evaluate the use of surprisal as a linking hypothesis, and further identify the limitations of this approach to inform future research.
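To make the surprisal idea concrete, a write rule for an external memory might gate on per-token surprisal under the model's own predictive distribution. The sketch below is a minimal illustration under that assumption; the function name, gating rule, and threshold are ours, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def surprisal_gated_write(logits, targets, hidden_states, memory, threshold=4.0):
    """Gate external-memory writes on per-token surprisal, -log p(token).

    A minimal sketch of surprisal as a linking hypothesis; the gating rule
    and threshold are illustrative assumptions, not the paper's method.
    """
    log_probs = F.log_softmax(logits, dim=-1)                             # (seq, vocab)
    surprisal = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (seq,)
    # Store representations of surprising tokens, mirroring the finding that
    # humans preferentially encode surprising events into episodic memory.
    for h in hidden_states[surprisal > threshold]:
        memory.append(h.detach())
    return surprisal
```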
Regularising the parameter matrices of neural networks is ubiquitous in training deep models. Typical regularisation approaches suggest initialising weights with small random values and penalising weights to promote sparsity. However, these widely used techniques may be less effective in certain scenarios. Here, we study the Koopman autoencoder model, which includes an encoder, a Koopman operator layer, and a decoder. These models have been designed to tackle physics-related problems, offering interpretable dynamics and the ability to incorporate physics-related constraints. However, the majority of existing work employs standard regularisation practices. In our work, we take a step toward augmenting Koopman autoencoders with initialisation and penalty schemes tailored for physics-related settings. Specifically, we propose the "eigeninit" initialisation scheme, which samples initial Koopman operators from specific eigenvalue distributions. In addition, we suggest the "eigenloss" penalty scheme, which penalises the eigenvalues of the Koopman operator during training. We demonstrate the utility of these schemes on two synthetic data sets, a driven pendulum and flow past a cylinder, and two real-world problems: ocean surface temperatures and cyclone wind fields. On these datasets, we find that eigenloss and eigeninit improve the convergence rate by up to a factor of 5, and that they reduce the cumulative long-term prediction error by up to a factor of 3. Such a finding points to the utility of incorporating similar schemes as an inductive bias in other physics-related deep learning approaches.
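As a rough illustration of what such schemes could look like, the sketch below samples an initial Koopman operator with eigenvalues near the unit circle and penalises eigenvalue magnitudes during training. The specific distributions and penalty are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
import torch
from scipy.linalg import block_diag

def eigeninit(n, r_min=0.9, r_max=1.0, seed=0):
    """Sample an initial Koopman operator with eigenvalues near the unit
    circle. The uniform distributions and the 2x2-block realification are
    illustrative assumptions, not the paper's exact scheme; n is even."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(r_min, r_max, size=n // 2)     # eigenvalue magnitudes
    theta = rng.uniform(0.0, np.pi, size=n // 2)   # rotation angles
    # Each conjugate pair r*exp(+/- i*theta) becomes a real 2x2
    # scaled-rotation block, giving a real operator with that spectrum.
    blocks = [ri * np.array([[np.cos(t), -np.sin(t)],
                             [np.sin(t),  np.cos(t)]]) for ri, t in zip(r, theta)]
    return torch.tensor(block_diag(*blocks), dtype=torch.float32)

def eigenloss(K, weight=1e-2):
    """Penalise eigenvalues far from the unit circle, i.e. modes that decay
    or blow up unphysically (one plausible choice of penalty)."""
    return weight * ((torch.linalg.eigvals(K).abs() - 1.0) ** 2).mean()
```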
For applications that require processing large amounts of text at inference time, Large Language Models (LLMs) are handicapped by their limited context windows, which are typically 2048 tokens. In-context learning, an emergent phenomenon in LLMs of sizes above a certain parameter threshold, constitutes one significant example, because it can only leverage training examples that fit into the context window. Existing efforts to address the context window limitation involve training specialized architectures, which tend to be smaller than the sizes at which in-context learning manifests, due to the memory footprint of processing long texts. We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training. The key to the approach is to carve a long context into chunks (``windows'') that fit within the architecture, restrict the attention mechanism to apply only within each window, and re-use the positional embeddings across the windows. We test the PCW approach on in-context learning with models that range in size between 750 million and 178 billion parameters, and show substantial improvements for tasks with diverse input and output spaces. Our results motivate further investigation of Parallel Context Windows as a method for applying off-the-shelf LLMs in other settings that require long text sequences.
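The mechanics can be summarised in a few lines of masking and position bookkeeping. The sketch below is a minimal rendering of the idea; the paper's exact handling of positions and task tokens may differ.

```python
import torch

def pcw_mask_and_positions(window_lens, task_len):
    """Build an attention mask and position ids in the spirit of Parallel
    Context Windows (a sketch, not the paper's implementation)."""
    total = sum(window_lens) + task_len
    allowed = torch.zeros(total, total, dtype=torch.bool)
    pos = torch.empty(total, dtype=torch.long)
    offset = 0
    for w in window_lens:
        allowed[offset:offset + w, offset:offset + w] = True  # attention stays inside the window
        pos[offset:offset + w] = torch.arange(w)              # position ids restart per window
        offset += w
    allowed[offset:, :] = True                                # task tokens attend to every window
    pos[offset:] = max(window_lens) + torch.arange(task_len)  # task tokens continue past the longest window
    causal = torch.tril(torch.ones(total, total, dtype=torch.bool))
    return allowed & causal, pos
```

Restarting the position ids inside each window is what keeps every window within the positional range the model saw during training.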
We introduce a new benchmark dataset, Placenta, for node classification in an underexplored domain: predicting microanatomical tissue structures from cell graphs in placenta histology whole slide images. This problem is uniquely challenging for graph learning for a few reasons. Cell graphs are large (>1 million nodes per image), node features are varied (64-dimensional, covering 11 cell types), class labels are imbalanced (9 classes, ranging from 0.21% of the data to 40.0%), and cellular communities cluster into heterogeneously distributed tissues of widely varying sizes (from 11 nodes to 44,671 nodes for a single structure). Here, we release a dataset consisting of two cell graphs from two placenta histology images, totalling 2,395,747 nodes, 799,745 of which have ground truth labels. We present inductive benchmark results for 7 scalable models and show how the unique qualities of cell graphs can help drive the development of novel graph neural network architectures.
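At this scale, full-batch training is impractical, so inductive benchmarks typically rely on neighbourhood sampling. The sketch below, using PyTorch Geometric's NeighborLoader with placeholder tensors in place of the released graphs, is an assumed setup rather than the dataset's official loading code.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.loader import NeighborLoader

# Placeholder tensors standing in for one placenta cell graph; the
# dataset's actual loading code may differ.
graph = Data(
    x=torch.randn(1_000_000, 64),                        # 64-dim cell features
    edge_index=torch.randint(0, 1_000_000, (2, 5_000_000)),
    y=torch.randint(0, 9, (1_000_000,)),                 # 9 imbalanced tissue classes
)
loader = NeighborLoader(graph, num_neighbors=[10, 10],   # sample 2-hop subgraphs
                        batch_size=1024, shuffle=True)
for batch in loader:
    pass  # run a scalable GNN (e.g. GraphSAGE) on each sampled subgraph
```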
Question answering models commonly have access to two sources of "knowledge" at inference time: (1) parametric knowledge - the factual knowledge encoded in the model weights, and (2) contextual knowledge - external knowledge (e.g., a Wikipedia passage) given to the model to generate a grounded answer. Having these two sources of knowledge entangled is a core issue for generative QA models, as it is unclear whether a given answer stems from the given non-parametric knowledge or not. This lack of clarity has implications for trust, interpretability, and factuality. In this work, we propose a new paradigm in which QA models are trained to disentangle the two sources of knowledge. Using counterfactual data augmentation, we introduce a model that predicts two answers for a given question: one based on the given contextual knowledge and one based on parametric knowledge. Our experiments on the Natural Questions dataset show that this approach improves the performance of QA models, making them more robust to knowledge conflicts between the two knowledge sources while generating useful disentangled answers.
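One way to picture the counterfactual augmentation: replace the gold answer span in the passage with a different entity, so the contextual and parametric answers are forced apart, and supervise both. The format and field names below are illustrative assumptions, not the paper's exact scheme.

```python
def make_counterfactual(example, substitute_answer):
    """Swap the gold answer span in the passage for a different entity, so
    the contextual answer diverges from what the model 'knows' (a sketch)."""
    cf_context = example["context"].replace(example["answer"], substitute_answer)
    return {
        "input": f"question: {example['question']} context: {cf_context}",
        # Supervise both answers so the model learns to keep them apart.
        "target": f"contextual: {substitute_answer} parametric: {example['answer']}",
    }

example = {"question": "Where was Marie Curie born?",
           "context": "Marie Curie was born in Warsaw.",
           "answer": "Warsaw"}
print(make_counterfactual(example, "Paris"))
```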
The task of topical segmentation is well studied, but previous work has mostly addressed it in the context of structured, well-defined segments, such as segmentation into paragraphs or chapters, or the segmentation of text that originated from multiple sources. We tackle the task of segmenting running (spoken) narratives, which poses hitherto unaddressed challenges. As a test case, we address Holocaust survivor testimonies given in English. Beyond the importance of studying these testimonies for Holocaust research, we argue that they provide an interesting test case for topical segmentation because of their unstructured surface level, their relative abundance (tens of thousands of such testimonies have been collected), and the relatively confined domain they cover. We hypothesize that boundary points between segments correspond to low mutual information between the sentences preceding and following the boundary. Based on this hypothesis, we explore a range of algorithmic approaches to the task, building on previous segmentation work that uses generative Bayesian modeling and on state-of-the-art neural machinery. Compared to manually annotated references, we find that the developed approaches show considerable improvements over previous work.
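The boundary hypothesis can be operationalised as a pointwise-mutual-information proxy between adjacent sentences, estimated here with an off-the-shelf causal language model. The model choice and scoring details below are assumptions for illustration, not the system developed in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def log_prob(text, prefix=""):
    """Approximate log p(text | prefix) under the LM (a sketch; assumes the
    tokenizer splits cleanly at the prefix boundary)."""
    ids = tok(prefix + text, return_tensors="pt").input_ids
    n_prefix = len(tok(prefix).input_ids)
    logits = lm(ids).logits[0, :-1]
    targets = ids[0, 1:]
    lp = torch.log_softmax(logits, -1).gather(-1, targets[:, None]).squeeze(-1)
    return lp[max(n_prefix - 1, 0):].sum().item()

def boundary_score(prev_sent, next_sent):
    # Low PMI: the next sentence is barely predictable from the previous
    # one, so a topic boundary is likely here.
    return log_prob(next_sent, prefix=prev_sent + " ") - log_prob(next_sent)
```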
Proponents of the Distributed Morphology framework have posited two levels of morphological word formation: a lower one, leading to loose input-output semantic relationships, and an upper one, leading to close input-output semantic relationships. In this work, we propose to test the validity of this hypothesis in the context of Hebrew word embeddings. If the two-level hypothesis is borne out, we expect state-of-the-art Hebrew word embeddings to encode (1) a noun, (2) a denominal derived from it (via an upper-level operation), and (3) a verb related to the noun (via a lower-level operation on the noun's root) in such a way that the denominal (2) should be closer to the noun (1) in the embedding space than the related verb (3) is to the same noun (1). We report that this hypothesis is verified by four embedding models of Hebrew: fastText, GloVe, word2vec, and AlephBERT. This suggests that word embedding models are able to capture complex and fine-grained semantic properties that are morphologically motivated.
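The prediction reduces to a simple distance comparison in embedding space. The sketch below uses placeholder vectors; in practice they would be looked up in the Hebrew embedding models named above for real (noun, denominal, related-verb) triples.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder vectors; replace with lookups from fastText/GloVe/word2vec
# (or pooled AlephBERT representations) for actual Hebrew word triples.
noun, denominal, verb = (np.random.randn(300) for _ in range(3))

# The two-level prediction: the upper-level derivative stays semantically
# closer to the noun than the lower-level, root-derived verb does.
holds = cosine(noun, denominal) > cosine(noun, verb)
```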
The success of deep learning comes at tremendous computational and energy costs, and the scalability of training massively overparameterized neural networks is becoming a real barrier to the progress of AI. Despite the popularity and low cost per iteration of traditional backpropagation via gradient descent, stochastic gradient descent (SGD) has a prohibitive convergence rate in non-convex settings, both in theory and in practice. To mitigate this cost, recent works have proposed employing alternative (Newton-type) training methods with much faster convergence, albeit at a higher cost per iteration. For a typical neural network with $m = \mathrm{poly}(n)$ parameters and an input batch of $n$ datapoints in $\mathbb{R}^d$, the previous work of [Brand, Peng, Song and Weinstein, ITCS'2021] requires $\sim mnd + n^3$ time per iteration. In this paper, we present a novel training method that requires only $m^{1-\alpha} n d + n^3$ amortized time in the same overparameterized regime, where $\alpha \in (0.01, 1)$ is some fixed constant. This method relies on a new and alternative view of neural networks as a set of binary search trees, where each iteration corresponds to modifying a small subset of the nodes in the trees. We believe this view will have further applications in the design and analysis of DNNs.
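Restating the abstract's bounds side by side (the constants hidden by $\sim$ are unspecified; since $m = \mathrm{poly}(n)$, shaving a factor of $m^{\alpha}$ off the dominant term is a polynomial per-iteration speedup):

```latex
\underbrace{\sim m\,n\,d + n^{3}}_{\text{prior work, per iteration}}
\quad\longrightarrow\quad
\underbrace{m^{1-\alpha}\,n\,d + n^{3}}_{\text{this work, amortized}},
\qquad \alpha \in (0.01, 1),\quad m = \mathrm{poly}(n).
```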
Privacy-preserving neural network (NN) inference solutions have recently gained significant traction, with several solutions offering different latency-bandwidth trade-offs. Of these, many rely on homomorphic encryption (HE), a method of performing computations over encrypted data. However, HE operations, even with state-of-the-art schemes, are still considerably slower than their plaintext counterparts. Pruning the parameters of a NN model is a well-known approach to improving inference latency. However, pruning methods that are useful in the plaintext context may yield nearly negligible improvements in the HE case, as has also been demonstrated in recent work. In this work, we propose a novel set of pruning methods that reduce latency and memory requirements, thus bringing the effectiveness of plaintext pruning methods to HE. Crucially, our proposal employs two key techniques, namely permutation and expansion of the packed model weights, which respectively enable pruning significantly more ciphertexts and recovering most of the induced accuracy loss. We demonstrate the advantage of our methods on fully connected layers whose weights are packed using a recently proposed packing technique called tile tensors, which allows executing deep NN inference in a non-interactive mode. We evaluate our methods on various autoencoder architectures and demonstrate that, for a small mean reconstruction loss of 1.5×10^{-5} on MNIST, we reduce the memory requirement and latency of HE-enabled inference by 60%.
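The packing constraint is what makes plaintext pruning ineffective here: a ciphertext can only be skipped when an entire tile of weights is zero. The sketch below prunes at tile granularity and hints at the permutation step; the tile shape, scoring rule, and permutation heuristic are illustrative assumptions, not the paper's method.

```python
import numpy as np

def tile_prune(W, tile=(8, 8), keep_ratio=0.5):
    """Prune whole tiles by aggregate magnitude, since under ciphertext
    packing only an all-zero tile lets a ciphertext be skipped (a sketch)."""
    th, tw = tile
    H, Wd = W.shape  # assumes H and Wd are multiples of the tile shape
    scores = np.add.reduceat(np.add.reduceat(np.abs(W), np.arange(0, H, th), axis=0),
                             np.arange(0, Wd, tw), axis=1)   # L1 norm of each tile
    cutoff = np.quantile(scores, 1.0 - keep_ratio)
    mask = np.kron((scores >= cutoff).astype(W.dtype), np.ones(tile, dtype=W.dtype))
    return W * mask

def permute_rows_by_norm(W):
    """Sketch of the permutation idea: group low-magnitude rows so they fall
    into the same tiles, letting more whole tiles (ciphertexts) be pruned;
    the inverse permutation must be applied to the layer's outputs."""
    order = np.argsort(np.linalg.norm(W, axis=1))
    return W[order], order
```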
Modern generative models are roughly divided into two main categories: (1) models that can produce high-quality random samples but cannot estimate the exact density of new data points, and (2) models that provide exact density estimation, at the expense of the quality of the samples and the compactness of the latent space. In this work we propose LED, a new generative model closely related to GANs that allows not only efficient sampling but also efficient density estimation. By maximizing the log-likelihood of the discriminator's output, we derive an alternative adversarial optimization objective that encourages diversity in the generated data. This formulation provides insights into the relationships among several popular generative models. In addition, we construct a flow-based generator that can compute the exact probability of generated samples while allowing low-dimensional latent variables as input. Our experimental results on various datasets show that our density estimator produces accurate estimates while retaining good quality in the generated samples.
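The exact-density claim for a flow-based generator with a low-dimensional latent presumably rests on the injective change-of-variables identity, reproduced here as a reminder (this is the standard identity, not necessarily the paper's exact formulation): for an injective generator $x = g(z)$ with $z \in \mathbb{R}^k$, $k \le d$, and Jacobian $J_g$,

```latex
\log p_X\big(g(z)\big) \;=\; \log p_Z(z) \;-\; \tfrac{1}{2}\,
\log\det\!\big(J_g(z)^{\top} J_g(z)\big).
```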